00100 .SEC EXPLANATIONS AND MODELS
00200 .SS The Nature of Explanation
00300 It is perhaps as difficult to explain explanation itself as
00400 it is to explain anything else. The explanatory practices of
00500 different sciences differ widely but they all share the purpose of
00600 someone attempting to answer someone else's (or his own)
00700 why-how-what-etc. questions about a situation, event, episode, object
00800 or phenomenon. Thus explanation implies a dialogue whose participants
00900 share some interests, beliefs, and values. A consensus must exist
01000 about admissible and appropriate questions and answers. The
01100 participants must agree on what is a sound and reasonable question
01200 and what is a relevant, intelligible, and (believed) correct answer.
01300 The explainer tries to satisfy a questioner's curiosity by making
01400 comprehensible why something is the way it is. The answer may be a
01500 definition, an example, a synonym, a story, a theory, a
01600 model-description, etc. The answer attempts to satisfy curiosity by
01700 settling belief.
01800 Suppose a man dies and a questioner (Q) asks an explainer (E):
01900 .V
02000 Q: Why did the man die?
02100 .END CONTINUE
02200 One answer might be:
02300 .V
02400 E: Because he took cyanide.
02500 .END CONTINUE
02600 This explanation might be sufficient to satisfy Q's curiosity and he
02700 stops asking further questions. Or he might continue:
02800 .V
02900 Q: Why did the cyanide kill him?
03000 .END CONTINUE
03100 and E replies:
03200 .V
03300 E: Anyone who ingests cyanide dies.
03400 .END CONTINUE
03500 This explanation appeals to a universal generalization under which is
03600 subsumed the particular fact of this man's death. Subsumptive
03700 explanations satisfy some questioners but not others who, for
03800 example, might want to know about the physiological mechanisms
03900 involved.
04000 .V
04100 Q: How does cyanide work in causing death?
04200 E: It stops respiration so the person dies from lack of oxygen.
04300 .END CONTINUE
04400 If Q has biochemical interests he might inquire further:
04500 .V
04600 Q: What is cyanide's mechanism of drug action on the
04700 respiratory center?
04900 .END CONTINUE
05000 The last two questions refer to causes. When human action is
05100 to be explained, confusion easily arises between appealing to
05200 physical, mechanical causes and appealing to symbolic-level reasons,
05300 that is, learned, acquired procedures or strategies (Toulmin, 1971).
05400 It is established clinical knowledge that the phenomena of
05500 the paranoid mode can be found associated with a variety of physical
05600 disorders. For example, paranoid thinking can be found in patients
05700 with head injuries, hyperthyroidism, hypothyroidism, uremia,
05800 pernicious anemia, cerebral arteriosclerosis, congestive heart
05900 failure, malaria and epilepsy. Also drug intoxications due to
06000 alcohol, amphetamines, marihuana and LSD can be accompanied by the
06100 paranoid mode. In these cases the paranoid mode is not a first-order
06200 disease but a way of processing information in reaction to some other
06300 underlying disorder. To account for the association of paranoid
06400 thought with these physical states of illness, a psychological
06500 theorist might be tempted to hypothesize that a purposive cognitive
06600 system would attempt to explain a physical illness state by
06700 constructing persecutory beliefs blaming other human agents for the
06800 ill-being of the disease state. But before making such an explanatory
06900 move, we must consider the elusive distinction between reasons and
07000 causes in explanations of human behavior.
07100 One view of the association of the paranoid mode with
07200 physical disorders might be that the physical illness simply causes
07300 the paranoia, through some unknown mechanism, at a "hardware" level
07400 beyond the influence of deliberate reprogramming and beyond voluntary
07500 self-control. That is, the resultant paranoid mode represents
07600 something that happens to a person as victim, not something that he
07700 does as an active agent. Another view is that the paranoid mode can
07800 be explained in terms of reasons, justifications which describe an
07900 agent's intentions and beliefs. Does a person as an agent
08000 recognize, monitor and control what he is doing or trying to do? Or
08100 does it just happen to him automatically without conscious
08200 deliberation? This question raises a third view, namely that
08300 unrecognized (but potentially recognizable) reasons, aspects of the
08400 program which are sealed off and inaccessible to voluntary control,
08500 can function like causes. Once brought to consciousness such reasons
08600 can be modified voluntarily by the agent, as a language user, by
08700 reflexively talking to and instructing himself. This second-order
08800 monitoring and control through language contrasts with an agent's
08900 inability to modify causes which lie beyond the influence of
09000 self-criticism and self-emancipation through internal linguistically
09100 mediated argumentation. Timeworn conundrums about concepts of
09200 free-will, determinism, responsibility, consciousness and the powers
09300 of mental action here plague us unless we stick closely to a computer
09400 analogy which makes a clear and useful distinction between levels of
09500 hardware, interpreter and programs in a self-referent system. (See p.
09600 000 in Chap 2)
09700
09800 Each of these three views provides a serviceable perspective
09900 depending on how a disorder is to be explained and corrected. When
10000 paranoid processes occur during amphetamine intoxication they might
10100 be viewed as biochemically caused and beyond the patient's ability to
10200 control volitionally through internal self-correcting dialogues with
10300 himself. When a paranoid moment occurs in a normal person, it can be
10400 viewed as having a misinterpretation as a reason. If the paranoid
10500 misinterpretation is recognized as such, a normal person has the
10600 emancipatory power to revise or reject it through internal debate.
10700 Between these extremes of drug-induced paranoid processes and the
10800 self-correctable paranoid moments of the normal person, lie cases of
10900 paranoid personalities, paranoid reactions and the paranoid mode
11000 associated with the major psychoses (schizophrenic and
11100 manic-depressive).
11200 One opinion has it that the major psychoses are a consequence
11300 of unknown physical "hardware" causes and are beyond deliberate
11400 voluntary control. But what are we to conclude about paranoid
11500 personalities and paranoid reactions where no hardware disorder is
11600 detectable or suspected? Are such persons to be considered patients
11700 to whom something is mechanically happening or are they agents whose
11800 behavior is a consequence of what they do? Or are they both agent
11900 and patient depending on how one views the self-modifiability of
12000 their symbolic processing? In these perplexing cases we shall take
12100 the position that in normal, neurotic and characterological paranoid
12200 modes, the psychopathology represents something that happens to a man
12300 as a consequence of what he has experientially undergone, of
12400 something he now does, and something he now undergoes. Thus he is
12500 both agent and victim whose symbolic processes have powers to do and
12600 liabilities to undergo. His liabilities are reflexive in that he
12700 is victim to, and can succumb to, his own symbolic structures.
12800
12900 From this standpoint I would postulate a duality between
13000 reasons and causes. That is, a reason can operate as an unrecognized
13100 cause in one context and be offered as a recognized justification in
13200 another. It is, of course, not the reason itself which serves as a
13300 cause but having the reason. Human symbolic behavior is
13400 non-determinate to the extent that it is self-determinate. Thus the
13500 power to make some decisions freely and to change one's mind is
13600 non-illusory. When a reason is recognized to function as a cause
13700 and is accessible to self-monitoring, emancipation from it can occur
13800 through change or rejection of belief. In this sense a two-levelled
13900 system involving an interpreter and its programs is self-changeable
14000 and self-emancipatory, within limits.
14100 Explanations both in terms of causes and reasons can be
14200 indefinitely extended and endless questions can be asked at each
14300 level of analysis. Just as the participants in explanatory dialogues
14400 decide what is taken to be problematic, so they also determine the
14500 termini of questions and answers. Each discipline has its
14600 characteristic stopping points and boundaries.
14700 In the background of explanatory dialogues are larger and
14800 smaller constellations of concepts which are taken for granted as
14900 nonproblematic background. Hence in considering the strategies of
15000 the paranoid mode `it goes without saying', that is, it transcends
15100 this particular mode of functioning, that any living
15200 teleonomic system, as the larger constellation, strives for the
15300 maintenance and expansion of life. Also it should go without saying
15400 that, at a lower level, ion transport takes place through nerve-cell
15500 membranes. Every function of an organism can be viewed as governing a
15600 subfunction beneath it and as depending on a transfunction above which
15700 calls it into play for a purpose.
15800 Just as there are many alternative ways of describing, there
15900 are many alternative ways of explaining. An explanation is geared to
16000 some level of what the dialogue participants take to be the
16100 fundamental structures and processes under consideration. Since in
16200 psychiatry we cope with patients' problems using mainly
16300 symbolic-conceptual techniques (although it is true that the pill,
16400 the knife, and electricity are also available), we are interested in
16500 aspects of human conduct which can be explained, understood, and
16600 modified at a symbol-processing level. Psychiatrists need theoretical
16700 symbolic systems from which their clinical experience can be
16800 logically derived to interpret the case histories of their patients.
16900 Otherwise they are faced with mountains of dross and indigestible
17000 data. "Science is an attempt to make the chaotic diversity of our
17100 sense experience correspond to a logically uniform system of thought
17200 by correlating single experiences with the theoretic structure."
17300 (Einstein).
17400
17500 .SS The Symbol Processing Viewpoint
17600
17700 Segments and sequences of human behavior can be looked at
17800 from many standpoints. In this monograph I shall view sequences of
17900 paranoid symbolic behavior from an information processing standpoint.
18000 For a more complete explication and justification of this
18100 symbol-processing view, see Newell (1973) and Newell and Simon (1972).
18200 In brief, information is defined as knowledge in a symbolic
18300 code. A symbolic process is a symbol-manipulating activity posited
18400 to account for observable symbolic behavior such as linguistic
18500 interaction. Symbols are defined as representations of experience
18600 classified as objects, events, situations, and relations.
18700 Symbol-processing explanations postulate an underlying
18800 structure of hypothetical processes, functions, strategies, or
18900 directed symbol-manipulating procedures, having the power to produce
19000 and being responsible for the manifest phenomena. Such a structure
19100 offers an ethogenic (ethos = conduct or character, genic =
19200 generating) explanation for sequences or segments of symbolic
19300 behavior. (See Harre and Secord, 1972). In adopting an ethogenic
19400 viewpoint, I shall posit processes, functions, procedures and
19500 strategies as being responsible for and having the power to generate
19600 the symbolic patterns and sequences characteristic of the paranoid
19700 mode. "Strategies" is perhaps the best general term since it
19800 implies ways of obtaining an objective which have suppleness and
19900 pliability, their choice of application depending on
20000 circumstances. However, I shall use all these terms
20100 interchangeably.
20200
20300 .SS Symbolic Models
20400 Theories and models share many functions and are often
20500 considered equivalent. One important distinction lies in the fact
20600 that a theory states that a subject has a certain structure but does
20700 not itself exhibit that structure. (See Kaplan, 1964). In the case of
20800 interactive simulation models, such as will be described, there
20900 exists a further distinction. Interactive simulation models,
21000 having the ability to converse in natural language using teletypes,
21100 actualize or realize a theory in the form of a dialogue algorithm. In
21200 contrast to a verbal, pictorial or mathematical representation, such
21300 a model changes its states over time and ends up in a state different
21400 from its initial state.
21500 In contrasting a description with what is described, Einstein
21600 remarked that it is not the function of science to give the taste of
21700 the soup. But an interactive simulation model which reproduces a
21800 segment of reality does just that, since it offers an interviewer a
21900 first-hand experience with a concrete case. In constructing a
22000 computer simulation, a theory is modelled to discover a sufficiently
22100 rich structure of assumptions to generate the observable behavior
22200 under study. A dialogue algorithm allows an observer to interact
22300 with a concrete specimen of a class in detail. In the case of our
22400 model, the level of detail is the level of the symbolic behavior of
22500 conversational language which is satisfying to a clinician who can
22600 compare the model with human counterparts at his familiar level of
22700 clinical dialogue. Communicating with the paranoid model by means of
22800 teletype, an interviewer can directly experience for himself the type
22900 of impaired social relationship which develops with someone in
23000 the paranoid mode.
23100 An algorithm composed of symbolic computational procedures
23200 converts input symbolic structures into output symbolic structures
23300 according to certain principles. The modus operandi of a symbolic
23400 model is simply the workings of an algorithm when run on a computer.
23500 At this level of explanation, to answer `why?' means to provide an
23600 algorithm which makes explicit how symbolic structures collaborate,
23700 interplay and interlock - in short, how they are organized to
23800 generate patterns of manifest phenomena.
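As a minimal illustration of this idea, the hypothetical Python sketch below converts an input symbolic structure into an output symbolic structure by explicit procedures. The particular words, rules and function names are invented here for exposition only; they are not the procedures of the model described later.
.V
# Hypothetical sketch: explicit procedures map input symbolic
# structures to output symbolic structures. The rule contents are
# illustrative, not those of the paranoid model itself.

def classify(input_symbols):
    """Convert a raw word list into an internal symbolic structure."""
    if "MAFIA" in input_symbols or "POLICE" in input_symbols:
        return {"topic": "persecution", "threat": True}
    return {"topic": "neutral", "threat": False}

def respond(internal_structure):
    """Convert the internal structure into an output symbolic structure."""
    if internal_structure["threat"]:
        return ["WHY", "DO", "YOU", "ASK", "ABOUT", "THAT"]
    return ["GO", "ON"]

utterance = "ARE YOU CONNECTED WITH THE MAFIA".split()
print(respond(classify(utterance)))   # -> ['WHY', 'DO', 'YOU', ...]
.END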
23900
24000 To simulate the sequential input-output behavior of a system
24100 using symbolic computational procedures, we write an algorithm
24200 which, when run on a computer, produces symbolic behavior resembling
24300 that of the subject system being simulated (Colby, 1973). The
24400 resemblance is achieved through the workings of an inner posited
24500 structure in the form of an algorithm, an organization of
24600 symbol-manipulating procedures which are responsible for the
24700 characteristic observable behavior at the input-output level. Since
24800 we do not know the structure of the `real' simulative processes used
24900 by the mind-brain, our posited structure stands as an imagined
25000 theoretical analogue, a possible and plausible organization of
25100 processes analogous to the unknown processes and serving as an
25200 attempt to explain the workings of the system under study. A
25300 simulation model is thus deeper than a pure black-box explanation
25400 because it postulates functionally equivalent processes inside the
25500 box to account for observable patterns of behavior. A simulation
25600 model constitutes an interpretive explanation in that it makes
25700 intelligible the connections between external input, internal states
25800 and output by positing intervening symbol-processing procedures
25900 operating between symbolic input and symbolic output. An
26000 intelligible description of the model should make clear why and how
26100 it reacts as it does under various circumstances.
26200 Citing a universal generalization to explain an individual's
26300 behavior is unsatisfactory to a questioner who is interested in what
26400 powers and liabilities are latent behind manifest phenomena. To say
26500 `x is nasty because x is paranoid and all paranoids are nasty' may be
26600 relevant, intelligible and correct. But another type of explanation
26700 is possible, a model-explanation referring to a structure which can
26800 account for `nasty' behavior as a consequence of input and internal
26900 states of a system. A model explanation specifies particular
27000 antecedents and processes through which antecedents generate the
27100 phenomena. An ethogenic approach to explanation assumes perceptible
27200 phenomena display the regularities and nonrandom irregularities they
27300 do because of the nature of an imperceptible and inaccessible
27400 underlying structure. The posited theoretical structure is an
27500 idealization, unobservable in human heads, not because it is too
27600 small, but because it is imaginary.
27700 When attempts are made to explain human behavior, principles
27800 in addition to those accounting for the natural order are invoked.
27900 "Nature entertains no opinions about us", said Nietzsche, but human
28000 natures do, and therein lies a source of complexity for the
28100 understanding of human conduct. Until the first quarter of the 20th
28200 century, the natural sciences were guided by the Newtonian ideal of
28300 perfect process knowledge about inanimate objects whose behavior can
28400 be subsumed under lawlike generalizations. When a deviation from a
28500 law was noticed, it was the law which was modified, since by
28600 definition physical objects do not have the power to break laws. When
28700 the planet Mercury was observed to deviate from the orbit predicted
28800 by Newtonian theory, no one accused the planet of being an
28900 intentional agent breaking the law; something was incorrect about the
29000 theory. Subsumptive explanation is the acceptable norm in physics
29100 but it is seldom satisfactory in accounting for the behavior of
29200 living purposive systems. In considering the behavior of bodies
29300 falling in a macroscopic world, no one nowadays follows the
29400 Aristotelian pattern of attributing to them intentions to fall. But
29500 in the case of living systems, especially ourselves, our ideal
29600 explanatory practice remains Aristotelian in utilizing a concept of
29700 intention. Aristotle's misconception in physics was to extend to the
29800 macroscopic non-living world an intentionalistic concept of purpose
29900 appropriate to the living world as a principle of intelligibility.
30000 (See Ayala, 1972).
30100 Consider a man participating in a high-diving contest. In
30200 falling towards the water he accelerates at the rate of 32 feet per
30300 second per second. Viewing the man simply as a falling body, we explain his rate
30400 of fall by appealing to a physical law. Viewing the man as a human
30500 intentionalistic agent, we explain his dive as the result of an
30600 intention to dive in a certain way in order to win the diving contest.
30700 His conduct (in contrast to mere movement) involves an intended
30800 following of certain conventional rules for what is judged by humans
30900 to constitute, say, a swan dive. Suppose part way down he chooses to
31000 change his position in mid-air and enter the water thumbing his nose
31100 at the judges. He cannot break the law of falling bodies but he can
31200 break the rules of diving and make a gesture which expresses
31300 disrespect and which he believes will be interpreted as such by the
31400 onlookers. Our diver breaks a rule for diving but follows another
31500 rule which prescribes gestural action for insulting behavior. To
31600 explain the actions of diving and nose-thumbing, we would appeal, not
31700 to laws of natural order, but to an additional order, to principles
31800 of human order, superimposed on laws of natural order and which take
31900 into account (1) standards of appropriate action in certain situations
32000 and (2) the agent's inner considerations of intention, belief and
32100 value which he finds compelling from his point of view. In this type
32200 of explanation the explanandum, that which is being explained, is the
32300 agent's informed actions, not simply his movements. When a human
32400 agent performs an action in a situation, we can ask: is the action
32500 appropriate to that situation and, if not, why did the agent believe
32600 his action to be called for?
32700 As will be shown, symbol-processing explanations rely on
32800 concepts of intention, belief, action, affect, etc. These terms are
32900 close to the terms of ordinary language as is characteristic of early
33000 stages of explanations. It is also important to note that such terms
33100 are commonly utilized in describing computer algorithms which strive
33200 to achieve goals. In an algorithm these ordinary terms can be
33300 explicitly defined and represented.
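To suggest what such an explicit definition might look like, here is a small hedged Python sketch in which the ordinary-language terms "belief" and "intention" are given explicit data representations. The structures, field names and example values are assumptions made for illustration, not the representations used in the model presented later.
.V
# Hypothetical sketch: ordinary-language terms such as "belief" and
# "intention" given explicit, inspectable representations.

from dataclasses import dataclass

@dataclass
class Belief:
    proposition: str      # e.g. "the interviewer can be trusted"
    credence: float       # strength of the belief, 0.0 to 1.0

@dataclass
class Intention:
    goal: str             # what the agent is trying to bring about
    plan: list            # ordered actions believed to achieve the goal

agent_beliefs = [Belief("the interviewer can be trusted", 0.2)]
agent_intentions = [Intention("avoid self-exposure",
                              ["change the topic", "give a short answer"])]
print(agent_beliefs, agent_intentions)
.END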
33400 Psychiatry deals with the practical concerns of inappropriate
33500 action, belief, etc. on the part of a patient. His behavior may be
33600 inappropriate to the onlooker since it represents a lapse from the
33700 expected, a contravention of the human order. It may even appear this
33800 way to the patient in monitoring and directing himself. But
33900 sometimes, as in severe cases of the paranoid mode, the patient's
34000 behavior does not appear anomalous to himself. He maintains that
34100 anyone who understands his point of view, who conceptualizes
34200 situations as he does from the inside, would consider his outward
34300 behavior appropriate and justified. What he does not understand or
34400 accept is that his inner conceptualization is mistaken and represents
34500 a misinterpretation of the events of his experience.
34600 The model to be presented in the sequel constitutes an
34700 attempt to explain some regularities and particular occurrences of
34800 symbolic (conversational) paranoid behavior observable in the
34900 clinical situation of a psychiatric interview. The explanation is
35000 at the symbol-processing level of linguistically communicating agents
35100 and is cast in the form of a dialogue algorithm. Like all
35200 explanations it is incomplete and does not claim to represent the
35300 only conceivable structure of processes.
35400
35500 .SS The Nature of Algorithms
35600
35700 Theories can be presented in various forms such as essays,
35800 mathematical equations and computer programs. To date most
35900 theoretical explanations in psychiatry and psychology have consisted
36000 of natural language essays with all their well-known vagueness and
36100 ambiguities. Many of these formulations have been untestable, not
36200 because relevant observations were lacking but because it was unclear
36300 what the essay was really saying. Clarity is needed.
36400 An alternative way of formulating psychological theories is
36500 now available in the form of symbol-processing algorithms, computer
36600 programs, which have the virtue of being clear and explicit in their
36700 articulation and which can be run on a computer to test internal
36800 consistency and external correspondence with the data of observation.
36900 The subject of a model is what it is a model of; the source of a
37000 model is what it is based upon. Since we do not know the `real'
37100 mind-brain algorithms, we construct a theoretical model, based upon
37200 computer algorithms, which represents a partial analogy. (Harre,
37300 1970). The analogy is made at the symbol-processing level, not at
37400 the hardware level. A functional, computational or procedural
37500 equivalence is being postulated. The question then becomes one of
37600 categorizing the extent of the equivalence. A beginning
37700 (first-approximation) functional equivalence might be defined as
37800 indistinguishability at the level of observable I-O pairs. A
37900 stronger equivalence would consist of indistinguishability at inner
38000 I-O levels. That is, there exists a correspondence between what is
38100 being done and how it is being done at a given level of operations.
38200 An algorithm represents an organization of symbol-processing
38300 strategies or functions which constitute an `effective procedure'. It
38400 is essential to grasp this fundamental concept of computer
38500 simulation. An effective procedure consists of two components:
38600 .V
38700 (1) A programming language in which procedural rules of
38800 behavior can be rigorously and unambiguously specified.
38900
39000 (2) A machine processor which can rapidly and reliably carry
39100 out the processes specified by the procedural rules.
39200 .END
39300 The specification of (1), written in a formally defined programming
39400 language, is termed an algorithm or program, while (2) involves a
39500 computer as the machine processor, a set of deterministic physical
39600 mechanisms which can perform the operations specified in the
39700 algorithm. The algorithm is called `effective' because it actually
39800 works, performing as intended when run on the machine processor.
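A toy Python sketch of the two components may help: component (1) is a small set of procedural rules written in an unambiguous notation (here, a made-up instruction list), and component (2) is a processor that reliably carries them out. Both the instruction set and the little program are invented for illustration only.
.V
# Toy "effective procedure": (1) rules specified unambiguously,
# (2) a processor that mechanically executes them.

program = [            # component (1): the procedural rules
    ("PUSH", "A"),
    ("PUSH", "B"),
    ("SWAP", None),
    ("PRINT", None),
]

def run(program):      # component (2): the machine processor
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "SWAP":
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif op == "PRINT":
            print(stack)          # -> ['B', 'A']

run(program)
.END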
39900 A simulation model is composed of procedures analogous to the
40000 real and unknown procedures. We are not claiming they ARE
40100 analogous, we are MAKING them so. The analogy being drawn here is
40200 between specified processes and their generating systems. Thus
40300
40400 .V
40500      mental process                 computational process
40600      --------------       ::        ---------------------
40700      brain hardware                 computer hardware
40800       and programs                    and programs
40900 .END
41000
41100 Many of the classical mind-brain problems arose because we
41200 did not yet have for analogy a familiar example of a system in which
41300 we could make a clear separation between hardware descriptions and
41400 program descriptions. With the advent of computers and programs some
41500 mind-brain perplexities disappear (Colby, 1971). The analogy is not
41600 simply between computer hardware and brain wetware. We are not
41700 comparing the structure of neurons with the structure of
41800 transistors; we are comparing the organization of symbol-processing
41900 procedures in an algorithm with symbol-processing procedures of the
42000 mind-brain. The central nervous system contains a representation of
42100 the experience of its holder. A model builder has a conceptual
42200 representation of that representation which he demonstrates in the
42300 form of a model. Thus the model is a demonstration of a
42400 representation of a representation.
42500 Since we are taking running computer programs as a source of
42600 analogy for a paranoid model, errors or pathological behavior on the
42700 part of such programs are of interest to the psychopathologist. These
42800 errors can be ascribed to the hardware level, to the interpreter or
42900 to the programs which the interpreter executes. Different remedies
43000 are required at different levels. If the analogy is to be useful in
43100 the case of human pathological behavior, it will become a matter of
43200 influencing symbolic behavior with the appropriate techniques.
43300 Since the algorithm is written in a programming language, it
43400 is hermetic except to a few people, who in general do not enjoy
43500 reading other people's code. Hence the intelligibility and
43600 scrutability requirement for explanations must be met in other ways.
43700 In an attempt to open the model to scrutiny I shall describe the
43800 model in detail using diagrams and interview examples profusely.
43900
44000
44100 .SS Analogy
44200
44300 I have stated that an interactive simulation model of
44400 symbol-manipulating processes reproduces sequences of symbolic
44500 behavior at the level of linguistic communication. The reproduction
44600 is achieved through the operations of an algorithm consisting of an
44700 organization of hypothetical symbol-processing strategies or
44800 procedures which can generate the I-O behavior of the subject-
44900 processes under investigation.The algorithm is be an "effective
45000 procedure" in the sense it really works in the manner intended by the
45100 model-builders. In the model to be described, the paranoid algorithm
45200 generates linguistic I-O behavior typical of patients whose
45300 symbol-processing is dominated by the paranoid mode. Comparisons can
45400 be made between samples of the I-O behaviors of patients and model.
45500 But the analogy is not to be drawn at this level. Mynah birds and
45600 tape recorders also reproduce human linguistic behavior but no one
45700 believes the reproduction is achieved by powers analogous to human
45800 powers. Given that the manifest outermost I-O behavior of the model
45900 is indistinguishable from the manifest outward I-O behavior of
46000 paranoid patients, does this imply that the hypothetical underlying
46100 processes used by the model are analogous to or the same as the
46200 underlying processes used by persons in the paranoid mode? This deep
46300 and far-reaching question should be approached with caution and only
46400 when we are first armed with some clear notions about analogy,
46500 similarity, faithful reproduction, indistinguishability and
46600 functional equivalence.
46700 In comparing two things (objects, systems or processes ) one
46800 can cite properties they have in common (positive analogy),
46900 properties they do not share (negative analogy) and properties which
47000 we do not yet know whether they are positive or negative (neutral
47100 analogy). (See Hesse, 1966). No two things are exactly alike in every
47200 detail. If they were identical in respect to all their properties
47300 then they would be copies. If they were identical in every respect
47400 including their spatio-temporal location we would say we have only
47500 one thing instead of two. Everything resembles something else and
47600 maybe everything else, depending upon how one cites properties.
47700 In an analogy a similarity relation is evoked. "Newton did
47800 not show the cause of the apple falling but he showed a similitude
47900 between the apple and the stars." (D'Arcy Thompson). Huygens suggested
48000 an analogy between sound waves and light waves in order to understand
48100 something less well-understood (light) in terms of something better
48200 understood (sound). To account for species variation, Darwin
48300 postulated a process of natural selection. He constructed an
48400 analogy from two sources, one from artificial selection as practiced
48500 by domestic breeders of animals and one from Malthus' theory of a
48600 competition for existence in a population increasing geometrically
48700 while its resources increase arithmetically. Bohr's model of the atom
48800 offered an analogy between the solar system and the atom. These well-known
48900 historical examples should be sufficient here to illustrate the role
49000 of analogies in theory construction. Analogies are made in respect
49100 to those properties which constitute the positive and neutral
49200 analogy. The negative analogy is ignored. Thus Bohr's model of
49300 the atom as a miniature planetary system was not intended to suggest
49400 that electrons possessed color or that planets jumped out of their
49500 orbits.
49600
49700 .SS Functional Equivalence
49800
49900 When human symbolic processes are the subject of a simulation
50000 model, we draw from two sources, symbolic computation and psychology.
50100 We propose an analogy between systems known to have the power to
50200 process symbols, namely, persons and computers. The properties
50300 compared in the analogy are obviously not physical or substantive
50400 such as blood and wires, but functional and procedural. We want to
50500 assume that the not-well-understood procedures of thought in a person
50600 are similar to the more accessible and better understood procedures
50700 of symbol-processing which take place in a computer. The analogy is
50800 one of functional or procedural equivalence. (For a further account
50900 of functional analysis see Hempel (1965)). Mousetraps, for example, are
51000 functionally equivalent. There exists a large set of physical
51100 mechanisms for catching mice. The term "mousetrap" says what all of
51200 the set has in common. They take as input a live mouse and yield as
51300 output a dead one. Systems equivalent from one point of view may not
51400 be equivalent from another (Fodor, 1968).
51500 If model and human are indistinguishable at the manifest
51600 level of linguistic I-O pairs, then they can be considered equivalent
51700 at that level. If they can be shown to be indistinguishable at
51800 more internal symbolic levels, then a stronger equivalence is
51900 achieved. How stringent and how extensive are the demands for
52000 equivalence to be? Must there be point-to-point correspondences at
52100 every level? What is to count as a point and what are the levels?
52200 Procedures can be specified and ostensively pointed to in an
52300 algorithm but how can we point to unobservable symbolic processes in
52400 a person's head? There is an inevitable limit to scrutinizing the
52500 "underlying" processes of the world. Einstein likened this situation
52600 to a man explaining the behavior of a watch without opening it:
52700 "He will never be able to compare his picture with the real mechanism
52800 and he cannot even imagine the possibility or meaning of such a
52900 comparison".
53000 In constructing an algorithm one puts together an
53100 organization of collaborating functions or procedures. A function
53200 takes some symbolic structure as input and yields some symbolic
53300 structure as output. Two computationally equivalent functions, having
53400 the same input and yielding the same output, can differ `inside' the
53500 function at the instruction level.
53600 Consider an elementary programming problem which students in
53700 symbolic computation are often asked to solve. Given a list L of
53800 symbols, L=(A B C D), as input, construct a function or procedure
53900 which will convert this list to the list RL in which the order of the
54000 symbols is reversed, i.e. RL=(D C B A). There are many ways of
54100 solving this problem and the code of one student may differ greatly
54200 from that of another at the level of individual instructions. But the
54300 differences of such details are irrelevant. What is significant is
54400 that the solutions make the required conversion from L to RL. The
54500 correct solutions will all be computationally equivalent at the
54600 input-output level since they take the same symbolic structures as
54700 input and produce the same symbolic output.
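A minimal sketch of two such solutions follows, written here in Python for concreteness (the original exercise would have been posed in a list-processing language of the period). Both convert L = (A B C D) into RL = (D C B A) and are therefore computationally equivalent at the input-output level, though they differ inside at the instruction level.
.V
# Two solutions to the reversal exercise: equivalent at the
# input-output level, different at the level of individual instructions.

def reverse_iterative(L):
    """Build the reversed list by pushing each symbol onto the front."""
    RL = []
    for symbol in L:
        RL.insert(0, symbol)
    return RL

def reverse_recursive(L):
    """Reverse the rest of the list, then append the first symbol."""
    if not L:
        return []
    return reverse_recursive(L[1:]) + [L[0]]

L = ["A", "B", "C", "D"]
assert reverse_iterative(L) == reverse_recursive(L) == ["D", "C", "B", "A"]
.END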
54800 If we propose that an algorithm we have constructed is
54900 functionally equivalent to what goes on in humans when they process
55000 symbolic structures, how can we justify this position?
55100 Indistinguishability tests at, say, the linguistic level provide
55200 evidence only for beginning equivalence. We would like to be able to
55300 have access to the underlying processes in humans the way we can with
55400 algorithms. (Admittedly, we do not directly observe processes but
55500 only their products). The difficulty lies in identifying, making
55600 accessible, and counting processes in human heads. Many
55700 symbol-processing experiments are now being designed and carried out.
55800 We must have great patience with this type of experimental
55900 psychology.
56000 In the meantime, besides first-approximation I-O equivalence
56100 and plausibility arguments, one might appeal to extra-evidential
56200 support, offering parallelisms from other scientific domains. One can
56300 offer analogies between what is known to go on at a molecular level
56400 in the cells of living organisms and what goes on in an algorithm.
56500 For example, a DNA molecule in the nucleus of a cell consists of an
56600 ordered sequence (list) of nucleotide bases (symbols) coded in
56700 triplets termed codons (words). Each codon specifies
56800 which amino acid is to be linked, during protein synthesis, into the
56900 polypeptide chain making up the protein. The codons function
57000 like instructions in a programming language. Some codons are known to
57100 operate as terminal symbols analogous to symbols in an algorithm
57200 which mark the end of a list. If, as a result of a mutation, a
57300 stop codon should appear in the middle of a sequence rather than at
57400 its normal terminal position, further protein synthesis is halted.
57500 The resulting polypeptide chain is abnormal and may have lethal or
57600 trivial consequences for the organism, depending on what must be
57700 passed on to other processes which require the polypeptide to be handed
57800 over to them. Similarly in an algorithm, if a terminating symbol is
57900 incorrect in a procedure, the procedure cannot operate in its
58000 intended manner. Such a result may be lethal or trivial to the
58100 algorithm depending on what information the faulty procedure must
58200 pass on at its interface with other procedures in the overall
58300 organization. Each procedure in an algorithm is embedded in an
58400 organization of collaborating procedures just as is the case of
58500 functions in living organisms. We know that at the molecular level of
58600 living organisms there exists a process such as serial progression
58700 along a nucleotide sequence, which is analogous to stepping down a
58800 list in an algorithm. Further analogies can be made between point
58900 mutations in which DNA codons can be inserted, deleted, substituted
59000 or reordered and symbolic computation in which the same operations
59100 are commonly carried out on symbolic structures. Such analogies are
59200 interesting as extra-evidential support but obviously closer linkages
59300 are needed between the macro-level of symbolic processes and the
59400 micro-level of molecular information-processing within cells.
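The terminator analogy can be sketched in a few lines of Python. A procedure reads symbols from a list up to a terminating symbol (compare a stop codon ending protein synthesis); if the terminator appears prematurely, later procedures receive a truncated structure. The symbol names and the premature-stop example are invented for illustration.
.V
# Sketch of the terminator analogy: reading a symbol list up to a
# terminating symbol, and the effect of a premature terminator.

STOP = "STOP"

def read_until_stop(symbols):
    """Collect symbols up to, but not including, the terminator."""
    collected = []
    for s in symbols:
        if s == STOP:
            break
        collected.append(s)
    return collected

normal  = ["MET", "GLY", "SER", "LYS", STOP]
mutated = ["MET", STOP, "SER", "LYS", STOP]   # premature terminator

print(read_until_stop(normal))    # ['MET', 'GLY', 'SER', 'LYS']
print(read_until_stop(mutated))   # ['MET']  -- truncated structure
.END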
59500 To obtain evidence for the acceptability of a model,
59600 empirical tests are utilized as validation procedures. Such tests
59700 should also tell us which is the best among alternative versions of a
59800 family of models and among different families of models.
59900 Scientific explanations do not stand alone in isolation. They are
60000 evaluated relative to rival contenders for the position of "best
60100 available". Once we accept a theory or model as the best available,
60200 can we be sure it is correct or true? We can never know with
60300 certainty. Theories and models are provisional approximations to
60400 nature, destined to be superseded by better ones.